Variance Reduction Using Nonreversible Langevin Samplers

Authors

  • A B Duncan
  • T Lelièvre
  • G A Pavliotis
Abstract

A standard approach to computing expectations with respect to a given target measure is to introduce an overdamped Langevin equation which is reversible with respect to the target distribution, and to approximate the expectation by a time-averaging estimator. As has been noted in recent papers [30, 37, 61, 72], introducing an appropriately chosen nonreversible component to the dynamics is beneficial, both in terms of reducing the asymptotic variance and of speeding up convergence to the target distribution. In this paper we present a detailed study of the dependence of the asymptotic variance on the deviation from reversibility. Our theoretical findings are supported by numerical simulations.
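To make the construction concrete, the following is a minimal sketch of the idea in Python, not the paper's own code: for a target pi(x) proportional to exp(-V(x)), the reversible overdamped dynamics dX_t = -grad V(X_t) dt + sqrt(2) dW_t is augmented with the divergence-free drift -gamma J grad V(X_t), J antisymmetric, which leaves pi invariant while breaking reversibility. The 2D Gaussian target, observable, step size and strength parameter gamma are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

Sigma_inv = np.array([[1.0, 0.9], [0.9, 1.0]])   # precision matrix of the Gaussian target
J = np.array([[0.0, 1.0], [-1.0, 0.0]])          # antisymmetric matrix defining the perturbation

def grad_V(x):
    return Sigma_inv @ x                         # V(x) = 0.5 * x^T Sigma_inv x

def time_average(gamma, f, n_steps=200_000, dt=1e-2):
    """Euler-Maruyama time average of f along dX = -(I + gamma*J) grad V dt + sqrt(2) dW."""
    x = np.zeros(2)
    total = 0.0
    drift_matrix = np.eye(2) + gamma * J
    for _ in range(n_steps):
        x = x - dt * drift_matrix @ grad_V(x) + np.sqrt(2.0 * dt) * rng.standard_normal(2)
        total += f(x)
    return total / n_steps

f = lambda x: x[0] ** 2                          # example observable

print("reversible    (gamma=0):", time_average(0.0, f))
print("nonreversible (gamma=5):", time_average(5.0, f))

Setting gamma = 0 recovers the reversible sampler, so the two time-averaging estimators can be compared directly on the same observable.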

Similar references

Using Perturbed Underdamped Langevin Dynamics to Efficiently Sample from Probability Distributions

In this paper we introduce and analyse Langevin samplers that consist of perturbations of the standard underdamped Langevin dynamics. The perturbed dynamics is such that its invariant measure is the same as that of the unperturbed dynamics. We show that appropriate choices of the perturbations can lead to samplers that have improved properties, at least in terms of reducing the asymptotic variance...
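A rough illustration of this class of perturbations, as a sketch under simplifying assumptions rather than the construction used in the reference: any SDE of the form dz = (J - D) grad H(z) dt + sqrt(2D) dW with constant antisymmetric J and positive semidefinite D leaves exp(-H) invariant, so enlarging J beyond the standard underdamped structure yields a perturbed dynamics with an unchanged invariant measure. The quadratic potential, perturbation block A and strength mu below are illustrative choices.

import numpy as np

rng = np.random.default_rng(1)
d = 2                                            # dimension of position q (and momentum p)
gamma = 1.0                                      # friction coefficient

def grad_H(z):
    q, p = z[:d], z[d:]
    return np.concatenate([q, p])                # H(q, p) = |q|^2/2 + |p|^2/2 (Gaussian target)

J0 = np.block([[np.zeros((d, d)),  np.eye(d)],
               [-np.eye(d),        np.zeros((d, d))]])   # standard underdamped structure
A  = np.block([[np.array([[0.0, 1.0], [-1.0, 0.0]]), np.zeros((d, d))],
               [np.zeros((d, d)),                    np.zeros((d, d))]])  # extra antisymmetric block
D  = np.block([[np.zeros((d, d)),  np.zeros((d, d))],
               [np.zeros((d, d)),  gamma * np.eye(d)]])  # friction and noise act on p only

def time_average(mu, f, n_steps=200_000, dt=5e-3):
    """Euler-Maruyama time average of f(q) along dz = (J0 + mu*A - D) grad H dt + sqrt(2D) dW."""
    z = np.zeros(2 * d)
    M = J0 + mu * A - D
    total = 0.0
    for _ in range(n_steps):
        noise = np.concatenate([np.zeros(d),
                                np.sqrt(2.0 * gamma * dt) * rng.standard_normal(d)])
        z = z + dt * M @ grad_H(z) + noise
        total += f(z[:d])
    return total / n_steps

f = lambda q: q[0] ** 2

print("standard underdamped (mu=0):", time_average(0.0, f))
print("perturbed            (mu=2):", time_average(2.0, f))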

Electron. Commun. Probab. 20 (2015), no. 15, DOI: 10.1214/ECP.v20-3855

In recent papers it has been demonstrated that sampling a Gibbs distribution from an appropriate time-irreversible Langevin process is, from several points of view, advantageous when compared to sampling from a time-reversible one. Adding an appropriate irreversible drift to the overdamped Langevin equation results in a larger large deviations rate function for the empirical measure of the process...

Variance Reduction in Stochastic Gradient Langevin Dynamics

Stochastic gradient-based Monte Carlo methods such as stochastic gradient Langevin dynamics are useful tools for posterior inference on large scale datasets in many machine learning applications. These methods scale to large datasets by using noisy gradients calculated using a mini-batch or subset of the dataset. However, the high variance inherent in these noisy gradients degrades performance ...
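One common way to make the gradient-variance issue concrete is an SVRG-style estimate, sketched below for a toy Gaussian model; the model, step size, batch size and anchor schedule are illustrative assumptions rather than the reference's setup. A full-data gradient is recomputed at an anchor point every few hundred steps, and mini-batches only estimate the correction relative to that anchor, which typically has much lower variance than a plain mini-batch gradient.

import numpy as np

rng = np.random.default_rng(2)

# Toy model: y_i ~ N(theta, 1) for i = 1..N, with prior theta ~ N(0, 10).
N = 10_000
y = rng.normal(1.5, 1.0, size=N)

def grad_prior(theta):
    return theta / 10.0                          # gradient of -log prior

def grad_lik(theta, idx):
    return theta - y[idx]                        # per-datum gradient of -log likelihood

def svrg_ld(n_steps=5_000, step=1e-5, batch=32, anchor_every=500):
    """SGLD with an SVRG-style variance-reduced gradient estimate."""
    theta = 0.0
    draws = np.empty(n_steps)
    for k in range(n_steps):
        if k % anchor_every == 0:                # refresh the anchor and its full-data gradient
            anchor = theta
            anchor_grad = grad_prior(anchor) + grad_lik(anchor, np.arange(N)).sum()
        idx = rng.integers(0, N, size=batch)
        # the mini-batch only estimates the correction relative to the anchor gradient
        corr = (N / batch) * (grad_lik(theta, idx) - grad_lik(anchor, idx)).sum()
        grad_est = anchor_grad + (grad_prior(theta) - grad_prior(anchor)) + corr
        theta = theta - 0.5 * step * grad_est + np.sqrt(step) * rng.standard_normal()
        draws[k] = theta
    return draws

draws = svrg_ld()
print("posterior mean estimate:", draws[1_000:].mean())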

Notes on Using Control Variates for Estimation with Reversible MCMC Samplers

A general methodology is presented for the construction and effective use of control variates for reversible MCMC samplers. The values of the coefficients of the optimal linear combination of the control variates are computed, and adaptive, consistent MCMC estimators are derived for these optimal coefficients. All methodological and asymptotic arguments are rigorously justified. Numerous MCMC s...
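The basic mechanics can be illustrated with a generic construction, not the reference's specific one: a control function g with known (zero) mean under the target, here obtained from a Stein-type identity E_pi[phi'(X) + phi(X) (log pi)'(X)] = 0 with phi(x) = x, is combined with the observable using a linear coefficient estimated from the same chain. The random-walk Metropolis chain, target and observable below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)

def log_pi(x):
    return -0.25 * x**4 - 0.5 * x**2             # unnormalised log target density

def score(x):
    return -(x**3 + x)                           # derivative of log_pi

def rwm_chain(n_steps=200_000, prop_std=1.0):
    """Random-walk Metropolis: a reversible chain targeting pi."""
    x = 0.0
    out = np.empty(n_steps)
    for k in range(n_steps):
        prop = x + prop_std * rng.standard_normal()
        if np.log(rng.uniform()) < log_pi(prop) - log_pi(x):
            x = prop
        out[k] = x
    return out

xs = rwm_chain()
f = xs**2                                        # observable whose expectation is estimated
g = 1.0 + xs * score(xs)                         # Stein-type control variate with E_pi[g] = 0

C = np.cov(f, g)                                 # estimate the optimal linear coefficient
theta_hat = C[0, 1] / C[1, 1]

print("plain ergodic average   :", f.mean())
print("control-variate estimate:", f.mean() - theta_hat * g.mean())

The coefficient theta_hat plays the role of the optimal linear combination weight; estimating it from the chain itself mirrors the adaptive, consistent estimators described in the reference.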

Stochastic Gradient Hamiltonian Monte Carlo with Variance Reduction for Bayesian Inference

Gradient-based Monte Carlo sampling algorithms, like Langevin dynamics and Hamiltonian Monte Carlo, are important methods for Bayesian inference. In large-scale settings, full-gradients are not affordable and thus stochastic gradients evaluated on mini-batches are used as a replacement. In order to reduce the high variance of noisy stochastic gradients, [Dubey et al., 2016] applied the standard...

Journal:

Volume: 163   Issue: -

Pages: -

Publication year: 2016